"DeepMind is trying to combine deep learning and algorithms, creating the one algorithm to rule them all: a deep learning model that can learn how to emulate any algorithm, generating an algorithm-equivalent model that can work with real-world data."
"In October 2016, algorithms reacted to negative news headlines about Brexit negotiations by sending the pound down 6% against the dollar in under two minutes, before recovering almost immediately. Knowing which particular headline, or which particular algorithm, caused the crash is next to impossible. When one haywire algorithm started placing and cancelling orders that ate up 4% of all traffic in US stocks in October 2012, one commentator was moved to comment wryly that "the motive of the algorithm is still unclear"."
"Indeed, a recent working paper in the area of machine learning suggests that the simpler the algorithm, the more likely its outcome will further disadvantage already disadvantaged groups. In other words, our social relationships are complex, and our algorithms should be, too. But in the quest to streamline processes, they aren't always, and that can be a huge problem."
"Creating an algorithm that discriminates or shows bias isn't as hard as it might seem, however. As a first-year graduate student, my advisor asked me to create a machine-learning algorithm to analyze a survey sent to United States physics instructors about teaching computer programming in their courses."
""Could teaching be trumped by a learning machine? Are we beginning to glimpse the possibility of machines that teach themselves to teach? They learn what works, what doesn't and deliver ever better performance. We see the embryonic evidence for this in adaptive learning systems, that are truly algorithmic, and do use machine learning, to improve as they deliver. The more students they teach, the better they get. They even tech themselves. This is not science fiction. This is real AI, in real software, delivering real courses, in real institutions." - Donald Clark"
"Researchers have shown that given access to only an API, a way to remotely use software without having it on your computer, it's possible to reverse-engineer machine learning algorithms with up to 99% accuracy. In the real world, this would mean being able to steal AI products from companies like Microsoft and IBM, and use them for free. Small companies built around a single machine learning API could lose any competitive advantage."
"Computer scientists and machine learning researchers are tackling the pandemic the way they know how: compiling datasets and building algorithms to learn from them."
"This type of approach can speed up learning times and improve the efficiency of algorithms, says Max Jaderberg at Google's AI company DeepMind. The company used a similar technique last year to teach an AI to explore a virtual maze. Its algorithm learned much more quickly than conventional reinforcement learning approaches. "Our agent is far quicker and requires a lot less experience from the world to train, making it much more data efficient," he says."
"In a paper titled "Towards Causal Representation Learning," researchers at the Max Planck Institute for Intelligent Systems, the Montreal Institute for Learning Algorithms (Mila), and Google Research discuss the challenges arising from the lack of causal representations in machine learning models and provide directions for creating artificial intelligence systems that can learn causal representations."
"The founders of Predictim want to be clear with me: Their product-an algorithm that scans the online footprint of a prospective babysitter to determine their "risk" levels for parents-is not racist. It is not biased.
"We take ethics and bias extremely seriously," Sal Parsa, Predictim's CEO, tells me warily over the phone. "In fact, in the last 18 months we trained our product, our machine, our algorithm to make sure it was ethical and not biased. We took sensitive attributes, protected classes, sex, gender, race, away from our training set. We continuously audit our model. And on top of that we added a human review process.""
"Researchers fed these algorithms (which function like autocomplete, but for images) pictures of a man cropped below his neck: 43% of the time the image was autocompleted with the man wearing a suit. When you fed the same algorithm a similarly cropped photo of a woman, it auto-completed her wearing a low-cut top or bikini a massive 53% of the time. For some reason, the researchers gave the algorithm a picture of the Democratic congresswoman Alexandria Ocasio-Cortez and found that it also automatically generated an image of her in a bikini. (After ethical concerns were raised on Twitter, the researchers had the computer-generated image of AOC in a swimsuit removed from the research paper.)"
"What you may not realize, though, is that machine learning is already all around you, and it can exert a surprising degree of influence over your life. Don't believe me? You might be surprised."
""This is the first time such machine learning tools have been used in this context," says Fluri, "and we found that the deep artificial neural network enables us to extract more information from the data than previous approaches. We believe that this usage of machine learning in cosmology will have many future applications.""
"How that effects systems of governance has yet to be fully explored, but there are cautions. "Algorithms are only as good as the data on which they are based, and the problem with current AI is that it was trained on data that was incomplete or unrepresentative and the risk of bias or unfairness is quite substantial," says West.
The fairness and equity of algorithms are only as good as the data-programming that underlie them. "For the last few decades we've allowed the tech companies to decide, so we need better guardrails and to make sure the algorithms respect human values," West says. "We need more oversight.""
"It's not that Google wants to do this, it's that they didn't anticipate this outcome, and compounded that omission by likewise omitting a way to overrule the algorithm's judgment. As with other examples of algorithmic cruelty, it's not so much this specific example as was it presages for a future in which more and more of our external reality is determined by models derived from machine learning systems whose workings we're not privy to and have no say in. "
""As expected, we found evidence of a performance improvement over generations due to social learning," the researchers wrote. "Adding an algorithm with a different problem-solving bias than humans temporarily improved human performance but improvements were not sustained in following generations. While humans did copy solutions from the algorithm, they appeared to do so at a lower rate than they copied other humans' solutions with comparable performance."
Brinkmann told Motherboard that while they were surprised superior solutions weren't more commonly adopted, this was in line with other research suggesting human biases in decision-making persist despite social learning. Still, the team is optimistic that future research can yield insight into how to amend this."
"The algorithmic manager seems to watch everything you do. Ride-hailing platforms track a variety of personalized statistics, including ride acceptance rates, cancellation rates, hours spent logged in to the app and trips completed. And they display selected statistics to individual drivers as motivating tools, like "You're in the top 10 percent of partners!"
Uber uses the accelerometer in drivers' phones along with GPS and gyroscope to give them safe driving reports, tracking their performance in granular detail. One driver posted to a forum that a grade of 210 out of 247 "smooth accelerations" earned a "Great work!" from the boss."
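A "smooth accelerations" grade of the kind the driver describes could, in principle, be computed directly from the sensor streams the passage mentions. The sketch below is a guess at the general idea, not Uber's actual scoring: the sampling step, the jerk threshold, and the event-counting rule are all invented for illustration.

```python
def count_smooth_accelerations(speeds, dt=1.0, jerk_limit=1.5):
    """Grade acceleration events from a speed trace (m/s, sampled every
    dt seconds). An event counts as 'smooth' when the change in
    acceleration between samples (the jerk) stays under jerk_limit.
    All thresholds here are illustrative, not Uber's real scoring."""
    # Finite-difference accelerations between consecutive speed samples.
    accels = [(b - a) / dt for a, b in zip(speeds, speeds[1:])]
    smooth = total = 0
    for a_prev, a_next in zip(accels, accels[1:]):
        if a_next > 0:                      # only count speeding-up events
            total += 1
            if abs(a_next - a_prev) <= jerk_limit:
                smooth += 1
    return smooth, total


# A short trip: gentle pull-away, brief cruise, then a jerky burst of speed.
trip = [0, 2, 4, 6, 8, 8, 7, 9, 14, 20]
smooth, total = count_smooth_accelerations(trip)
```

On this trace the gentle pull-away registers as smooth while the late burst does not, yielding a fraction analogous to the driver's "210 out of 247". The granular tracking the article describes is just this kind of per-event bookkeeping, applied continuously.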
"The streaming video company's recommendation algorithm can sometimes send you on an hours-long video binge so captivating that you never notice the time passing. But according to a study from software nonprofit Mozilla Foundation, trusting the algorithm means you're actually more likely to see videos featuring sexualized content and false claims than personalized interests."
"But users began to spot flaws in the feature over the weekend. The first to highlight the issue was PhD student Colin Madland, who discovered the issue while highlighting a different racial bias in the video-conference software Zoom.
When Madland, who is white, posted an image of himself and a black colleague who had been erased from a Zoom call after its algorithm failed to recognise his face, Twitter automatically cropped the image to only show Madland."